Reducing the Effects of Detrimental Instances
Not all instances in a data set are equally beneficial for inducing a model
of the data. Some instances (such as outliers or noise) can be detrimental.
However, at least initially, the instances in a data set are generally
considered equally in machine learning algorithms. Many current approaches for
handling noisy and detrimental instances make a binary decision about whether
an instance is detrimental or not. In this paper, we 1) extend this paradigm by
weighting the instances on a continuous scale and 2) present a methodology for
measuring how detrimental an instance may be for inducing a model of the data.
We call our method of identifying and weighting detrimental instances reduced
detrimental instance learning (RDIL). We examine RDIL on a set of 54 data sets
and 5 learning algorithms and compare RDIL with other weighting and filtering
approaches. RDIL is especially useful for learning algorithms where every
instance can affect the classification boundary and the training instances are
considered individually, such as multilayer perceptrons (MLPs) trained with
backpropagation. Our results also suggest that a more accurate estimate
of which instances are detrimental can have a significant positive impact for
handling them.
Comment: 6 pages, 5 tables, 2 figures. arXiv admin note: substantial text overlap with arXiv:1403.189
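
The abstract leaves the detriment measure and weighting scheme abstract, so the following is only a minimal Python sketch of the general idea: score each training instance by a hypothetical detriment estimate (here, the out-of-fold probability mass a reference learner assigns to other classes) and use the complementary continuous weights when training. estimate_detriment and the resampling step are illustrative assumptions, not the paper's implementation.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.neural_network import MLPClassifier

def estimate_detriment(X, y, n_folds=5):
    # Hypothetical detriment score: out-of-fold probability mass that a
    # reference learner assigns to classes other than the instance's label.
    proba = cross_val_predict(
        RandomForestClassifier(n_estimators=100, random_state=0),
        X, y, cv=n_folds, method="predict_proba")
    return 1.0 - proba[np.arange(len(y)), y]

# Continuous weights instead of a binary keep/discard decision.
X, y = make_classification(n_samples=300, random_state=0)
w = 1.0 - estimate_detriment(X, y)

# MLPClassifier has no sample_weight argument, so this sketch resamples
# training instances in proportion to their weights instead.
rng = np.random.default_rng(0)
idx = rng.choice(len(y), size=len(y), p=w / w.sum())
clf = MLPClassifier(max_iter=1000, random_state=0).fit(X[idx], y[idx])

Down-weighting rather than discarding matters most for learners like MLPs, where, as noted above, every training instance can move the classification boundary.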
Recommending Learning Algorithms and Their Associated Hyperparameters
The success of machine learning on a given task depends on, among other
things, which learning algorithm is selected and its associated
hyperparameters. Selecting an appropriate learning algorithm and setting its
hyperparameters for a given data set can be a challenging task, especially for
users who are not experts in machine learning. Previous work has examined using
meta-features to predict which learning algorithm and hyperparameters should be
used. However, choosing a set of meta-features that are predictive of algorithm
performance is difficult. Here, we propose to apply collaborative filtering
techniques to learning algorithm and hyperparameter selection, and find that
doing so avoids determining which meta-features to use and outperforms
traditional meta-learning approaches in many cases.
Comment: Short paper -- 2 pages, 2 tables
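
The abstract does not specify which collaborative filtering model is used, so the sketch below shows one common choice under stated assumptions: plain matrix factorization over a (data set x algorithm/hyperparameter configuration) accuracy matrix with missing entries, recommending the configuration with the highest predicted accuracy. The factorization, its hyperparameters, and the toy matrix are all illustrative.

import numpy as np

def factorize(R, mask, k=3, lr=0.01, reg=0.1, epochs=2000, seed=0):
    # R ~= U @ V.T, fit by gradient descent on observed entries only.
    # Rows index data sets; columns index algorithm/hyperparameter
    # configurations; mask is True where an accuracy has been measured.
    rng = np.random.default_rng(seed)
    n, m = R.shape
    U = 0.1 * rng.standard_normal((n, k))
    V = 0.1 * rng.standard_normal((m, k))
    for _ in range(epochs):
        E = mask * (R - U @ V.T)      # error on observed entries
        U += lr * (E @ V - reg * U)
        V += lr * (E.T @ U - reg * V)
    return U @ V.T

# Toy accuracy matrix; np.nan marks configurations never run on a data set.
R = np.array([[0.91, 0.85, np.nan],
              [0.64, np.nan, 0.70],
              [np.nan, 0.88, 0.93]])
mask = ~np.isnan(R)
pred = factorize(np.nan_to_num(R), mask)
print(int(np.argmax(pred[1])))        # recommended configuration for data set 1

Note that no meta-features appear anywhere in this sketch: the latent factors are learned purely from observed performance values, which is what lets the approach sidestep meta-feature selection.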
Missing Value Imputation With Unsupervised Backpropagation
Many data mining and data analysis techniques operate on dense matrices or
complete tables of data. Real-world data sets, however, often contain unknown
values. Even many classification algorithms that are designed to operate with
missing values still exhibit deteriorated accuracy. One approach to handling
missing values is to fill in (impute) the missing values. In this paper, we
present a technique for unsupervised learning called Unsupervised
Backpropagation (UBP), which trains a multi-layer perceptron to fit to the
manifold sampled by a set of observed point-vectors. We evaluate UBP with the
task of imputing missing values in datasets, and show that UBP is able to
predict missing values with significantly lower sum-squared error than other
collaborative filtering and imputation techniques. We also demonstrate with 24
datasets and 9 supervised learning algorithms that classification accuracy is
usually higher when randomly-withheld values are imputed using UBP, rather than
with other methods.
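
Since the abstract describes UBP only at a high level, here is a minimal numpy sketch of the core mechanism under stated assumptions: a one-hidden-layer network, squared error on observed entries, and vanilla gradient descent that updates the latent inputs alongside the network weights. The actual UBP (architecture, training schedule) is more elaborate than this.

import numpy as np

def ubp_impute(X, k=2, h=8, lr=0.05, epochs=5000, seed=0):
    # Jointly learn one latent vector per row and the weights of a small
    # MLP so that the MLP maps each latent vector onto the observed
    # entries of its row; missing entries are then read off the MLP.
    rng = np.random.default_rng(seed)
    mask, R = ~np.isnan(X), np.nan_to_num(X)
    n, m = X.shape
    Z = 0.1 * rng.standard_normal((n, k))        # latent inputs (learned)
    W1 = 0.1 * rng.standard_normal((k, h)); b1 = np.zeros(h)
    W2 = 0.1 * rng.standard_normal((h, m)); b2 = np.zeros(m)
    for _ in range(epochs):
        H = np.tanh(Z @ W1 + b1)                 # forward pass
        E = mask * (H @ W2 + b2 - R)             # error on observed entries only
        dH = (E @ W2.T) * (1.0 - H ** 2)         # backprop to the hidden layer
        for p, g in ((W2, H.T @ E), (b2, E.sum(0)),
                     (W1, Z.T @ dH), (b1, dH.sum(0)),
                     (Z, dH @ W1.T)):            # ...and to the latent inputs
            p -= lr * g / n
    return np.where(mask, X, np.tanh(Z @ W1 + b1) @ W2 + b2)

X = np.array([[1.0, 2.0, np.nan],
              [2.0, np.nan, 6.0],
              [np.nan, 4.0, 6.0],
              [1.0, 2.0, 3.0]])
print(ubp_impute(X))

With the hidden layer removed this collapses to matrix factorization; the nonlinear hidden layer is what lets UBP fit a curved manifold through the observed point-vectors.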
An Easy to Use Repository for Comparing and Improving Machine Learning Algorithm Usage
The results from most machine learning experiments are used for a specific
purpose and then discarded. This results in a significant loss of information
and requires rerunning experiments to compare learning algorithms. Comparison
also requires implementing other algorithms, which may not always be done
correctly. By storing the results from previous
experiments, machine learning algorithms can be compared easily and the
knowledge gained from them can be used to improve their performance. The
purpose of this work is to provide easy access to previous experimental results
for learning and comparison. These stored results are comprehensive -- storing
the prediction for each test instance as well as the learning algorithm,
hyperparameters, and training set that were used. Previous results are
particularly important for meta-learning, which, in a broad sense, is the
process of learning from previous machine learning results such that the
learning process is improved. While other experiment databases do exist, one of
our focuses is on easy access to the data. We provide meta-learning data sets
that are ready to be downloaded for meta-learning experiments. In addition,
queries to the underlying database can be made if specific information is
desired. We also differ from previous experiment databases in that our
database is designed at the instance level, where an instance is an example in
a data set. We store the predictions of a learning algorithm trained on a
specific training set for each instance in the test set. Data set level
information can then be obtained by aggregating the results from the instances.
The instance level information can be used for many tasks such as determining
the diversity of a classifier or algorithmically determining the optimal subset
of training instances for a learning algorithm.
Comment: 7 pages, 1 figure, 6 tables
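
To make the instance-level design concrete, here is a small hypothetical sketch in Python/SQLite; the table, column, and algorithm names are illustrative assumptions and need not match the repository's actual schema. It shows per-instance predictions being aggregated up to data-set-level accuracy, and being used directly to compute a pairwise diversity (disagreement) measure.

import sqlite3

# One row per (algorithm, hyperparameters, training set, test instance).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE predictions (
    algorithm TEXT, hyperparameters TEXT, train_set TEXT,
    data_set TEXT, instance_id INTEGER,
    predicted TEXT, actual TEXT)""")
rows = [
    ("MLP",  "lr=0.01", "train_A", "iris", 0, "setosa",     "setosa"),
    ("MLP",  "lr=0.01", "train_A", "iris", 1, "virginica",  "versicolor"),
    ("C4.5", "default", "train_A", "iris", 0, "setosa",     "setosa"),
    ("C4.5", "default", "train_A", "iris", 1, "versicolor", "versicolor"),
]
conn.executemany("INSERT INTO predictions VALUES (?,?,?,?,?,?,?)", rows)

# Data-set-level accuracy falls out of aggregating the instance rows.
for alg, acc in conn.execute(
        """SELECT algorithm, AVG(predicted = actual)
           FROM predictions WHERE data_set = 'iris'
           GROUP BY algorithm"""):
    print(alg, acc)

# Instance-level rows also give pairwise disagreement between classifiers.
q = """SELECT AVG(a.predicted <> b.predicted)
       FROM predictions a JOIN predictions b
         ON a.instance_id = b.instance_id AND a.data_set = b.data_set
       WHERE a.algorithm = 'MLP' AND b.algorithm = 'C4.5'"""
print(conn.execute(q).fetchone()[0])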